AI and Ethics
Responsible Artificial Intelligence: A Structured Literature Review
Goellner, Sabrina, Tropmann-Frick, Marina, Brumen, Bostjan
Our research endeavors to advance the concept of responsible artificial intelligence (AI), a topic of increasing importance within EU policy discussions. The EU has recently issued several publications emphasizing the necessity of trust in AI, underscoring the dual nature of AI as both a beneficial tool and a potential weapon. This dichotomy highlights the urgent need for international regulation. Concurrently, there is a need for frameworks that guide companies in AI development, ensuring compliance with such regulations. Our research aims to assist lawmakers and machine learning practitioners in navigating the evolving landscape of AI regulation, identifying focal areas for future attention. This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI. Through a structured literature review, we elucidate the current understanding of responsible AI. Drawing from this analysis, we propose an approach for developing a future framework centered around this concept. Our findings advocate for a human-centric approach to Responsible AI. This approach encompasses the implementation of AI methods with a strong emphasis on ethics, model explainability, and the pillars of privacy, security, and trust.
A Vision for Operationalising Diversity and Inclusion in AI
Bano, Muneera, Zowghi, Didar, Gervasi, Vincenzo
The growing presence of Artificial Intelligence (AI) in various sectors necessitates systems that accurately reflect societal diversity. This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems, addressing the current disconnect between ethical guidelines and their practical implementation. A significant challenge in AI development is the effective operationalization of D&I principles, which is critical to prevent the reinforcement of existing biases and ensure equity across AI applications. This paper proposes a vision of a framework for developing a tool utilizing persona-based simulation by Generative AI (GenAI). The approach aims to facilitate the representation of the needs of diverse users in the requirements analysis process for AI software. The proposed framework is expected to lead to a comprehensive persona repository with diverse attributes that inform the development process with detailed user narratives. This research contributes to the development of an inclusive AI paradigm that ensures future technological advances are designed with a commitment to the diverse fabric of humanity.
Values in AI: bioethics and the intentions of machines and people - AI and Ethics
Artificial intelligence has the potential to impose the values of its creators on its users, on those affected by it, and on society. Users may also intend to use a technological device in an illicit or unexpected way. Devices change people's intentions as people are empowered by technology: what people mean to do with the help of technology reflects their choices, preferences, and values. Technology is a disruptor that impacts society as a whole.
Researchers develop a new way to see how people feel about artificial intelligence
People in Japan, the U.S. and Germany show different concerns regarding artificial intelligence (AI) being used in entertainment, shopping services, or to help find criminals, reports a new study in AI and Ethics. Japanese people tended to report more concern about AI used to fight crime, while Germans and Americans tended to report more concern over the ethical and social aspects of using AI in entertainment, according to the study. "We found there is a difference in the AI and ELSI [ethics, legal, and social issues] levels of understanding between countries. I think it will become important to carry out thorough discussions about the legal and policy issues surrounding AI," said first author and Kanazawa University Associate Professor Yuko Ikkatai. AI is currently being used in a wide range of fields, which has raised both positive and negative attitudes in the general public.
Operationalising AI governance through ethics-based auditing: an industry case study - AI and Ethics
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA, such as the feasibility and effectiveness of different auditing procedures, have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit.
Why Are We Failing at the Ethics of AI?
As you read this, AI systems and algorithmic technologies are being embedded and scaled far more quickly than existing governance frameworks (i.e., the rules of the road) are evolving. While it is clear that AI systems offer opportunities across various areas of life, what amounts to a responsible perspective on their ethics and governance is yet to be realized. This should be setting off alarm bells across society. The current inability of actors to meaningfully address AI ethics has created a perfect storm: one in which AI is exacerbating existing inequalities while simultaneously creating new systemic issues at a rapid pace. But why hasn't this issue been effectively addressed?
AI and Ethics: Experts Speak about Challenges, Possible Directions
I spoke with top executives from The Rockefeller Foundation and Salesforce about efforts to regulate artificial intelligence, with the goal of building a solid ethical framework for this powerful emerging technology's growth.
Is AI good or bad – and who decides?
One of the most frequently cited technology historians, Professor Melvin Kranzberg, was a major proponent of the law of unintended consequences. This becomes clear in his original 1986 paper, at the point where he expands on how he coined the first of his Laws of Technology. "I mean that technology's interaction with the social ecology is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purpose of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances." Going further, Kranzberg observed that many technology-related problems arise when "apparently benign" technologies are introduced at scale. Kranzberg died in 1995 and, for him in his time, an example of this phenomenon was DDT: in one context, a pesticide with dangerous side effects; in another, an important weapon to curb the spread of malaria.
AI and Ethics -- Operationalising Responsible AI
Zhu, Liming, Xu, Xiwei, Lu, Qinghua, Governatori, Guido, Whittle, Jon
In the last few years, AI has continued to demonstrate its positive impact on society, though sometimes with ethically questionable consequences. Building and maintaining public trust in AI has been identified as the key to successful and sustainable innovation. This chapter discusses the challenges related to operationalizing ethical AI principles and presents an integrated view that covers high-level ethical AI principles, the general notion of trust/trustworthiness, and product/process support in the context of responsible AI, which helps improve both trust in and trustworthiness of AI for a wider set of stakeholders.
The strange bedfellows of AI and ethics
Over the last decade, we have heard a lot of doom-saying about how artificial intelligence (AI) would result in the loss of huge numbers of jobs. However, the picture (across both public and private sectors) is now starting to look not only more nuanced but also more positive. A 2017 report from consultancy PwC suggested that embedding AI across all sectors is likely to create thousands of jobs. In the UK, one estimate suggests that it could contribute as much as 5% of GDP within 10 years. That’s not to say that we won’t lose jobs, because we undoubtedly will. However, they will be